
adds the e2e coverage collector for the operator using codecov #138

Open
siddhibhor-56 wants to merge 1 commit into openshift:main from siddhibhor-56:codecov

Conversation


siddhibhor-56 (Contributor) commented Apr 12, 2026

Adds the e2e coverage collector for the operator using codecov

Summary by CodeRabbit

  • Chores
    • Added end-to-end coverage support: builds and publishes a coverage-instrumented image and collects coverage data during E2E runs.
    • Added tooling to extract, convert, and optionally upload E2E coverage reports to Codecov (upload is skipped if no token).
    • Improved CI/local workflows for gathering E2E coverage artifacts.

@openshift-ci

openshift-ci Bot commented Apr 12, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: siddhibhor-56
Once this PR has been reviewed and has the lgtm label, please assign mytreya-rh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai

coderabbitai Bot commented Apr 12, 2026

Walkthrough

Adds end-to-end coverage support: a coverage-instrumented Docker image, a multi-stage Dockerfile to build it, a bash orchestration script with setup/collect modes to mount a PVC, swap the operator image, extract coverage data and optionally upload to Codecov, plus Makefile targets/variables to build, push, and collect coverage artifacts.

Changes

E2E Coverage integration

  • Makefile (configuration / variables): adds COVERAGE_IMG ?= $(IMG)-e2e-coverage and a documentation block describing the E2E coverage flow.
  • images/ci/Dockerfile.coverage (build artifact): new multi-stage Dockerfile that compiles the operator with Go coverage flags and produces a minimal runtime image setting GOCOVERDIR=/tmp/e2e-cover.
  • hack/e2e-coverage.sh (core orchestration): adds setup() to create the PVC, discover and patch the CSV to use the coverage image, mount the PVC, set GOCOVERDIR, and wait for the rollout; adds collect() to scale down the operator, run an extractor pod to copy coverage files, convert covmeta data to a Go coverage profile, and conditionally upload to Codecov.
  • Makefile (wiring / targets): adds docker-build-coverage, docker-push-coverage, and e2e-coverage-collect (runs hack/e2e-coverage.sh collect), plus optional ARTIFACT_DIR handling.
  • hack/..., Makefile help text (docs): usage and behavior notes; no test files added.
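A sketch of what the instrumented build described above might look like. This is an assumption-laden illustration (base images, the ./cmd build path, and the nonroot UID are guesses), not the PR's actual images/ci/Dockerfile.coverage:

```dockerfile
# Build stage: compile the operator with Go coverage instrumentation.
FROM golang:1.22 AS builder
WORKDIR /workspace
COPY . .
# -cover makes the binary emit covmeta/covcounters files at exit into
# the directory named by GOCOVERDIR.
RUN CGO_ENABLED=0 go build -cover -o operator ./cmd

# Runtime stage: minimal image carrying only the instrumented binary.
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /workspace/operator /operator
ENV GOCOVERDIR=/tmp/e2e-cover
USER 65534:65534
ENTRYPOINT ["/operator"]
```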

Sequence Diagram

sequenceDiagram
    participant CI as CI Runner
    participant K8s as Kubernetes API
    participant Operator as Operator Pod
    participant PVC as Coverage PVC
    participant Extractor as Short-lived Extractor Pod
    participant Codecov as Codecov

    CI->>K8s: run hack/e2e-coverage.sh setup (create PVC, patch CSV to use coverage image)
    K8s->>Operator: rollout controller with coverage image, mount PVC, set GOCOVERDIR
    CI->>K8s: run tests against instrumented Operator
    CI->>K8s: run hack/e2e-coverage.sh collect (scale down operator)
    K8s->>PVC: operator writes coverage files to PVC on shutdown
    CI->>K8s: create Extractor Pod that mounts PVC
    Extractor->>PVC: read coverage files
    Extractor-->>CI: copy coverage artifacts to CI workspace
    CI->>CI: convert covmeta -> go coverage profile
    CI->>Codecov: optionally upload profile (if token present)
    Codecov-->>CI: upload response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 10 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Microshift Test Compatibility — ⚠️ Warning. The PR adds Ginkgo e2e tests using the operator.openshift.io API, which is unavailable on MicroShift, and the tests lack [apigroup], [Skipped:MicroShift], or IsMicroShiftCluster() protection. Resolution: add [apigroup:operator.openshift.io] tags to Describe blocks, add IsMicroShiftCluster() guards, or use [Skipped:MicroShift] labels to prevent failures on MicroShift.
✅ Passed checks (10 passed)
  • Description Check — ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title 'adds the e2e coverage collector for the operator using codecov' directly and clearly summarizes the main change: adding E2E coverage collection infrastructure with Codecov integration.
  • Linked Issues Check — ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check — ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Stable and Deterministic Test Names — ✅ Passed. All 18 Ginkgo test titles are static and deterministic with no dynamic values. Dynamic content only appears in test bodies (resource names), not test names, which is correct.
  • Test Structure and Quality — ✅ Passed. Tests have proper setup/teardown, all Eventually calls include timeouts, assertions have meaningful messages, each test has a single responsibility, and patterns are consistent.
  • Single Node OpenShift (SNO) Test Compatibility — ✅ Passed. No Ginkgo e2e tests were added. The PR adds Makefile targets, a bash coverage script, and a Dockerfile, all infrastructure changes; the SNO compatibility check only applies to new test definitions.
  • Topology-Aware Scheduling Compatibility — ✅ Passed. The PR adds test infrastructure (Makefile targets, script, Dockerfile) only. No deployment manifests are modified and no scheduling constraints are introduced; compatible with all OpenShift topologies.
  • OTE Binary Stdout Contract — ✅ Passed. No violations found. Stdout writes are either to GinkgoWriter (explicitly exempted) or within test utility closures (not process-level code as defined by the check).
  • IPv6 and Disconnected Network Test Compatibility — ✅ Passed. This PR adds only infrastructure/tooling files (Makefile, bash script, Dockerfile) for e2e coverage collection, not Ginkgo tests; the check requires Ginkgo e2e tests and none are present.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.





@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@images/ci/Dockerfile`:
- Around line 30-31: The Dockerfile currently uses "RUN mkdir -p /tmp/e2e-cover
&& chmod 777 /tmp/e2e-cover" which creates a world-writable directory; instead
create the directory, chown it to UID/GID 65534 and set restrictive permissions
(e.g. 0700 or 0750) so only the intended user can write; update the RUN that
creates /tmp/e2e-cover to use chown 65534:65534 and chmod 700 (or 750 if group
write is needed) rather than chmod 777.

In `@Makefile`:
- Around line 234-243: The kubectl cp invocation in the Makefile is causing a
nested e2e-cover directory, so change the kubectl cp usage to copy the contents
of /tmp/e2e-cover directly into $(E2E_COVERAGE_DIR); update the kubectl cp line
that references $(KUBECTL) cp "$(E2E_COVERAGE_NAMESPACE)/$$POD:/tmp/e2e-cover"
$(E2E_COVERAGE_DIR) to use a trailing dot on the source (e.g.
"$(E2E_COVERAGE_NAMESPACE)/$$POD:/tmp/e2e-cover/." $(E2E_COVERAGE_DIR)) so
covmeta and covcounters land directly in $(E2E_COVERAGE_DIR) for go tool covdata
to consume.
- Around line 245-248: Replace the unsafe "latest" downloader invocation in the
Makefile (the curl/chmod/./codecov sequence) with a pinned release workflow:
download a specific versioned uploader binary URL (not "latest") and its
corresponding SHA256SUM (or .sha256) file, verify the binary against the
checksum (using sha256sum -c or equivalent) before making it executable and
running it (the ./codecov invocation), and fail the step if verification fails;
keep the existing flags (--file=$(OUTPUTS_PATH)/coverage-e2e.out --flags=e2e
--name="E2E Coverage" --verbose) and still remove the binary afterwards. Ensure
the Makefile references the chosen version string so it is explicit and
reproducible.
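The trailing-dot behavior the kubectl cp comment above relies on mirrors plain cp -r semantics, which can be checked locally (file names here are illustrative stand-ins for the real covmeta/covcounters artifacts):

```shell
#!/usr/bin/env bash
set -euo pipefail
cd "$(mktemp -d)"

mkdir -p e2e-cover out-nested out-flat
touch e2e-cover/covmeta.demo e2e-cover/covcounters.demo

cp -r e2e-cover   out-nested/   # copies the directory itself -> nested path
cp -r e2e-cover/. out-flat/     # copies the *contents* -> files land flat

ls out-nested                   # e2e-cover
ls out-flat                     # covcounters.demo, covmeta.demo
```

With the trailing dot, covmeta/covcounters land directly in the target directory, which is what `go tool covdata` expects for its -i input.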

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9ff32fe0-a38f-4c43-ab77-dfdadbf0a804

📥 Commits

Reviewing files that changed from the base of the PR and between 2a65cb8 and 5670473.

📒 Files selected for processing (2)
  • Makefile
  • images/ci/Dockerfile

Comment thread images/ci/Dockerfile Outdated
Comment on lines +30 to +31
# Create the directory where Go will write coverage data at runtime.
RUN mkdir -p /tmp/e2e-cover && chmod 777 /tmp/e2e-cover

⚠️ Potential issue | 🟠 Major

Avoid world-writable permissions for /tmp/e2e-cover.

chmod 777 weakens container security unnecessarily. Since the process runs as UID/GID 65534, set ownership and restrictive perms instead.

Suggested hardening
-RUN mkdir -p /tmp/e2e-cover && chmod 777 /tmp/e2e-cover
+RUN mkdir -p /tmp/e2e-cover && chown 65534:65534 /tmp/e2e-cover && chmod 700 /tmp/e2e-cover

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."


Comment thread Makefile Outdated

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (10)
test/utils/bitwarden_resources.go (2)

26-32: Constants are duplicated with aws_resources.go.

The constants externalSecretsAPIVersionV1 and clusterSecretStoreKind are also defined in test/utils/aws_resources.go. Consider extracting these to a shared constants file within the utils package to avoid drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden_resources.go` around lines 26 - 32, The constants
externalSecretsAPIVersionV1 and clusterSecretStoreKind are duplicated across
bitwarden_resources.go and aws_resources.go; extract these into a shared utils
package constants file (e.g., constants.go) and replace the local definitions in
both bitwarden_resources.go and aws_resources.go with references to the new
shared constants (externalSecretsAPIVersionV1, clusterSecretStoreKind), removing
the duplicate declarations so both files import/use the single source of truth.

70-96: Consider handling SetNestedField errors.

While SetNestedField typically only fails if the path is structurally invalid (which won't happen with these compile-time known paths), explicitly checking the error would make the code more defensive.

-	_ = unstructured.SetNestedField(u.Object, []interface{}{
+	if err := unstructured.SetNestedField(u.Object, []interface{}{
 		map[string]interface{}{
 			"secretKey": "value",
 			"remoteRef": map[string]interface{}{
 				"key": remoteKey,
 			},
 		},
-	}, "spec", "data")
+	}, "spec", "data"); err != nil {
+		panic(fmt.Sprintf("failed to set spec.data: %v", err))
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden_resources.go` around lines 70 - 96, The SetNestedField
calls in BitwardenExternalSecretByName and BitwardenExternalSecretByUUID ignore
returned errors; update both functions to capture the error from
unstructured.SetNestedField and handle it (e.g., if err != nil {
panic(fmt.Errorf("failed to set spec.data for %s: %w", name, err)) }) so
failures are surfaced; reference the SetNestedField call sites in
BitwardenExternalSecretByName and BitwardenExternalSecretByUUID and include the
original error text in the panic/log message for diagnosability.
test/utils/bitwarden_api_runner.go (1)

35-38: Consider pinning the curl image tag for reproducibility.

Using docker.io/curlimages/curl:latest can lead to non-reproducible test behavior if the image is updated with breaking changes. Consider pinning to a specific version.

 const (
-	bitwardenAPITestRunnerImage = "docker.io/curlimages/curl:latest"
+	bitwardenAPITestRunnerImage = "docker.io/curlimages/curl:8.7.1"
 	bitwardenCredMountPath      = "/etc/bitwarden-cred"
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden_api_runner.go` around lines 35 - 38, The constant
bitwardenAPITestRunnerImage currently uses the mutable tag
"docker.io/curlimages/curl:latest" which can cause flaky tests; change the value
of bitwardenAPITestRunnerImage to a specific, immutable curlimages tag (e.g., a
known semver like :8.x.y) and add a short comment noting why the tag is pinned;
ensure any CI/test docs referencing this constant are updated when you
intentionally upgrade the pinned tag.
Makefile (2)

182-182: Add PHONY declaration for the targets on this line.

The static analysis tool flagged that targets listed here should be declared as .PHONY to ensure they're always executed regardless of whether a file with that name exists.

+.PHONY: fmt vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep
 fmt vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep: GOFLAGS=
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` at line 182, The Makefile line defining the combined targets "fmt
vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep:
GOFLAGS=" lacks a .PHONY declaration, so add a .PHONY rule listing these exact
target names (fmt vet test test-unit test-e2e e2e-coverage-publish run
update-vendor update-dep) elsewhere in the Makefile to ensure they always run
regardless of files with those names; update the .PHONY block to include all
these targets.

219-222: Codecov uploader pinning and SHA256 verification are correct.

The implementation correctly pins v0.7.6 and verifies the SHA256 checksum before execution. v0.7.6 is a valid release. However, v0.8.0 is now the latest available version. If this version is not pinned for specific compatibility reasons, consider upgrading to v0.8.0 to benefit from any bug fixes or security patches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 219 - 222, Update the pinned Codecov uploader by
changing the CODECOV_VERSION variable from v0.7.6 to v0.8.0; the derived URLs
(CODECOV_URL and CODECOV_SHA_URL) will follow automatically. If there is a
deliberate compatibility constraint, add a brief comment next to CODECOV_VERSION
explaining why v0.7.6 must remain pinned instead of upgrading to v0.8.0.
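The verify-before-execute step the Makefile implements can be exercised locally. In this sketch a scratch file stands in for the downloaded uploader binary, so only the sha256sum verification logic is demonstrated, not the actual download:

```shell
#!/usr/bin/env bash
set -euo pipefail
cd "$(mktemp -d)"

# Stand-in for: curl -O .../vX.Y.Z/linux/codecov and its .SHA256SUM file.
printf 'fake uploader binary\n' > codecov
sha256sum codecov > codecov.SHA256SUM

# Verify before chmod +x / execution; under set -e a mismatch aborts the step.
sha256sum -c codecov.SHA256SUM
echo "verified, safe to run"

# A tampered binary must fail verification.
printf 'tampered\n' > codecov
if ! sha256sum -c codecov.SHA256SUM 2>/dev/null; then
  echo "tamper detected, aborting upload"
fi
```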
test/utils/bitwarden.go (2)

182-201: Consider validating organization and project IDs.

FetchBitwardenCredsFromSecret validates that token is non-empty but silently returns empty strings for orgID and projectID if they're missing. If these are required for Bitwarden API operations, consider validating them as well.

 	if v, ok := secret.Data[BitwardenCredKeyOrgID]; ok {
 		orgID = string(v)
+	} else {
+		return "", "", "", fmt.Errorf("bitwarden creds secret %s/%s missing required key %q", secretNamespace, secretName, BitwardenCredKeyOrgID)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden.go` around lines 182 - 201,
FetchBitwardenCredsFromSecret currently only enforces token presence but
silently allows orgID and projectID to be empty; update the function to validate
BitwardenCredKeyOrgID and BitwardenCredKeyProjectID like TokenSecretKey and
return a descriptive error if either is missing or empty (use the same error
formatting pattern as the existing token check, referencing
secretNamespace/secretName and the missing key names) so callers receive a clear
failure when required organization or project IDs are absent.

326-341: Permissive error handling may mask connectivity issues.

The reachability check treats http_code=000 (connection refused/timeout/TLS failure) and empty logs as success to avoid failing the suite. While this is pragmatic, it could mask real connectivity problems that would cause subsequent tests to fail with less informative errors.

Consider at minimum logging a warning when these permissive paths are taken, so operators can investigate if tests fail later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden.go` around lines 326 - 341, In the PodFailed case of the
reachability check (the switch handling corev1.PodFailed), add a warning log
before returning true on the permissive paths so connectivity issues aren't
silently ignored: when getPodLogs(...) contains "http_code=000" or when logs
contain "error on the server" or are empty/whitespace, call the test/logger
warning function to record the pod/container identity (use
formatPodContainerStatus(p) and podName/BitwardenOperandNamespace) and the raw
logs, then return true,nil as before.
test/utils/dynamic_resources.go (1)

112-127: Input mutation may surprise callers.

getResourceInterface mutates the input unstructuredObj by calling SetNamespace (Line 121). This side effect could be unexpected for callers who pass an object they intend to reuse. Consider documenting this behavior or working with a copy.

 func (d DynamicResourceLoader) getResourceInterface(unstructuredObj *unstructured.Unstructured, overrideNamespace string) dynamic.ResourceInterface {
 	gvk := unstructuredObj.GroupVersionKind()
 	gr, err := restmapper.GetAPIGroupResources(d.KubeClient.Discovery())
 	require.NoError(d.t, err)
 	mapper := restmapper.NewDiscoveryRESTMapper(gr)
 	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
 	require.NoError(d.t, err)
 	if mapping.Scope.Name() == meta.RESTScopeNameNamespace {
 		if overrideNamespace != "" {
-			unstructuredObj.SetNamespace(overrideNamespace)
+			unstructuredObj.SetNamespace(overrideNamespace) // Note: mutates input object
 		}
 		require.NotEmpty(d.t, unstructuredObj.GetNamespace(), "Namespace can not be empty for namespaced resource")
 		return d.DynamicClient.Resource(mapping.Resource).Namespace(unstructuredObj.GetNamespace())
 	}
 	return d.DynamicClient.Resource(mapping.Resource)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/dynamic_resources.go` around lines 112 - 127, The
getResourceInterface function currently mutates the caller's unstructuredObj by
calling SetNamespace on it when applying overrideNamespace; avoid surprising
side-effects by operating on a copy: create a deep copy of unstructuredObj
(e.g., via unstructuredObj.DeepCopy()), set the namespace on the copy rather
than the original, and use the copy's GetNamespace when selecting the Resource
Interface; alternatively, clearly document that getResourceInterface will mutate
unstructuredObj if overrideNamespace is provided. Ensure references to
SetNamespace, unstructuredObj, overrideNamespace and getResourceInterface are
updated accordingly.
test/utils/artifact_dump.go (1)

171-177: Consider pre-compiling the regex for sanitizeFilename.

The regex is compiled on every call. While acceptable for e2e test utilities where this runs infrequently, pre-compiling at package level would be slightly more efficient.

♻️ Optional: Pre-compile regex
+var sanitizeFilenameRe = regexp.MustCompile(`[^a-zA-Z0-9_-]`)
+
 func sanitizeFilename(s string) string {
-	s = regexp.MustCompile(`[^a-zA-Z0-9_-]`).ReplaceAllString(s, "_")
+	s = sanitizeFilenameRe.ReplaceAllString(s, "_")
 	if len(s) > 100 {
 		s = s[:100]
 	}
 	return s
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/artifact_dump.go` around lines 171 - 177, The sanitizeFilename
function currently compiles the regex on every call; move the
regexp.MustCompile(`[^a-zA-Z0-9_-]`) to a package-level variable (e.g.,
filenameSanitizeRegex) and update sanitizeFilename to use
filenameSanitizeRegex.ReplaceAllString(s, "_") so the regex is compiled once and
behavior (truncation to 100 chars) remains unchanged.
test/e2e/bitwarden_api_test.go (1)

139-144: Consider documenting why both 200 and 400 are acceptable.

The test accepts either HTTP 200 or 400 for empty ids list. Adding a brief comment explaining the expected API behavior variance would improve clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/bitwarden_api_test.go` around lines 139 - 144, Add a short comment
above the It("GET /rest/api/1/secrets-by-ids with empty ids should return 200 or
400", ...) test explaining why both HTTP 200 and 400 are accepted (e.g.,
different implementations may treat an empty ids list as a valid no-op returning
200 or as a client error returning 400), so future readers of the test (and
maintainers of runAPIJob/script and the API contract) understand the intended
variance; reference the test name, the script variable that posts '{"ids":[]}',
and runAPIJob to locate the exact spot to insert the comment.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: a9d37d64-68e3-464b-9b91-29402719f719

📥 Commits

Reviewing files that changed from the base of the PR and between 5670473 and 0c0adfd.

📒 Files selected for processing (20)
  • Makefile
  • README.md
  • hack/govulncheck.sh
  • images/ci/Dockerfile.codecoverage
  • test/README.md
  • test/apis/README.md
  • test/e2e/README.md
  • test/e2e/bitwarden_api_test.go
  • test/e2e/bitwarden_es_test.go
  • test/e2e/e2e_suite_test.go
  • test/e2e/e2e_test.go
  • test/go.mod
  • test/utils/artifact_dump.go
  • test/utils/aws_resources.go
  • test/utils/bitwarden.go
  • test/utils/bitwarden_api_runner.go
  • test/utils/bitwarden_resources.go
  • test/utils/cleanup.go
  • test/utils/conditions.go
  • test/utils/dynamic_resources.go
✅ Files skipped from review due to trivial changes (6)
  • README.md
  • test/README.md
  • test/apis/README.md
  • hack/govulncheck.sh
  • test/go.mod
  • test/e2e/README.md

siddhibhor-56 force-pushed the codecov branch 3 times, most recently from 3604751 to 8f666be on May 6, 2026 at 18:23

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@hack/e2e-coverage.sh`:
- Around line 97-118: The collect() function can leave the coverage-extractor
pod running if a later command fails; add a cleanup trap so the pod is always
removed on EXIT. Implement a small cleanup function (e.g., cleanup_extractor)
that runs oc delete pod coverage-extractor -n "${NAMESPACE}" --ignore-not-found
--wait=false 2>/dev/null || true, set trap 'cleanup_extractor' EXIT immediately
after creating the pod (right after the oc run/oc wait sequence), and
remove/reset the trap (trap - EXIT) once files are successfully copied and
before returning so normal completion doesn't trigger redundant cleanup.
- Around line 58-63: The CSV patch currently uses append ops to add env,
volumeMounts and volumes which will create duplicates if run twice; modify the
oc patch logic around oc patch "${csv}" to first check for an existing env entry
named GOCOVERDIR (or existing volume/volumeMount named coverage-data) and skip
the corresponding add ops if present, or alternatively replace the whole
container spec in
/spec/install/spec/deployments/0/spec/template/spec/containers/0 with a
sanitized container object that sets image to COVERAGE_IMAGE and sets env
GOCOVERDIR to GOCOVERDIR_PATH and the volumeMounts/volumes (using PVC_NAME),
ensuring you update the patch operations to use replace on the container array
or conditional logic to avoid appending duplicates.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: a6f2a1e6-94ca-4808-93c6-5d2746b62eb7

📥 Commits

Reviewing files that changed from the base of the PR and between 8f666be and a467b24.

📒 Files selected for processing (3)
  • Makefile
  • hack/e2e-coverage.sh
  • images/ci/Dockerfile.coverage

Comment thread hack/e2e-coverage.sh
Comment on lines +58 to +63
oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[
    {\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"},
    {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}},
    {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}},
    {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}}
]"

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify non-idempotent append operations are present in the setup patch.
rg -n '"/spec/install/spec/deployments/0/spec/template/spec/containers/0/(env|volumeMounts)/-"|"/spec/install/spec/deployments/0/spec/template/spec/volumes/-"' hack/e2e-coverage.sh

Repository: openshift/external-secrets-operator

Length of output: 61


🏁 Script executed:

cat -n hack/e2e-coverage.sh | sed -n '50,70p'

Repository: openshift/external-secrets-operator

Length of output: 1660


🏁 Script executed:

# Find the setup function and its callers
grep -n "setup()" hack/e2e-coverage.sh | head -20

Repository: openshift/external-secrets-operator

Length of output: 92


🏁 Script executed:

# Search for main and overall flow
sed -n '1,100p' hack/e2e-coverage.sh

Repository: openshift/external-secrets-operator

Length of output: 4211


🏁 Script executed:

# View the end of the file to see script invocation logic
tail -50 hack/e2e-coverage.sh

Repository: openshift/external-secrets-operator

Length of output: 2209


Guard CSV patching against duplicate entries when re-running setup.

Lines 59–62 use append operations (/- suffix) to add env, volumeMounts, and volumes to the pod spec. Running setup twice will duplicate these entries, breaking the pod spec with duplicate environment variables and volume mounts. Add a check for the existing GOCOVERDIR entry before patching, or use a replace operation on the entire container array instead of appending.

Suggested approach
+    local already_patched
+    already_patched="$(oc get csv "${csv}" -n "${NAMESPACE}" -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].env[?(@.name=="GOCOVERDIR")].name}' 2>/dev/null || true)"
+    if [[ -n "${already_patched}" ]]; then
+        echo "CSV already patched for coverage; skipping patch step."
+    else
     oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[
         {\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"},
         {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}},
         {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}},
         {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}}
     ]"
+    fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[
-    {\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"},
-    {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}},
-    {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}},
-    {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}}
-]"
+local already_patched
+already_patched="$(oc get csv "${csv}" -n "${NAMESPACE}" -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].env[?(@.name=="GOCOVERDIR")].name}' 2>/dev/null || true)"
+if [[ -n "${already_patched}" ]]; then
+    echo "CSV already patched for coverage; skipping patch step."
+else
+    oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[
+        {\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"},
+        {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}},
+        {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}},
+        {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}}
+    ]"
+fi
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/e2e-coverage.sh` around lines 58 - 63, The CSV patch currently uses
append ops to add env, volumeMounts and volumes which will create duplicates if
run twice; modify the oc patch logic around oc patch "${csv}" to first check for
an existing env entry named GOCOVERDIR (or existing volume/volumeMount named
coverage-data) and skip the corresponding add ops if present, or alternatively
replace the whole container spec in
/spec/install/spec/deployments/0/spec/template/spec/containers/0 with a
sanitized container object that sets image to COVERAGE_IMAGE and sets env
GOCOVERDIR to GOCOVERDIR_PATH and the volumeMounts/volumes (using PVC_NAME),
ensuring you update the patch operations to use replace on the container array
or conditional logic to avoid appending duplicates.
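The check-before-append guard recommended in this thread can be exercised locally without a cluster. The sketch below is a hypothetical model: a temp file stands in for the CSV's `containers[0].env` list, and a plain `grep` stands in for the `oc get csv ... -o jsonpath` lookup. It shows why the guard makes the patch idempotent:

```shell
#!/usr/bin/env bash
# Minimal local model of the idempotency guard: append a GOCOVERDIR env
# entry only when it is not already present, so repeated runs are no-ops.
set -euo pipefail

ENV_FILE="$(mktemp)"                      # stands in for containers[0].env
printf 'OPERATOR_NAME\nWATCH_NAMESPACE\n' > "${ENV_FILE}"

add_gocoverdir() {
    # Stand-in for: oc get csv ... jsonpath '...env[?(@.name=="GOCOVERDIR")]...'
    if grep -qx 'GOCOVERDIR' "${ENV_FILE}"; then
        echo "already patched; skipping"
    else
        echo 'GOCOVERDIR' >> "${ENV_FILE}"    # stand-in for the `oc patch` add op
        echo "patched"
    fi
}

add_gocoverdir    # first run: appends the entry
add_gocoverdir    # second run: guard fires, nothing is duplicated
```

The first call prints `patched`, every later call prints `already patched; skipping`, and the entry appears exactly once no matter how many times setup is re-run.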

Comment thread hack/e2e-coverage.sh
Comment on lines +97 to +118
echo "Creating extractor pod to access PVC data..."
oc run coverage-extractor \
    --image="${EXTRACTOR_IMAGE}" \
    --restart=Never \
    --overrides="{
        \"spec\": {
            \"volumes\": [{\"name\": \"cov\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}],
            \"containers\": [{\"name\": \"coverage-extractor\", \"image\": \"${EXTRACTOR_IMAGE}\",
                \"command\": [\"sleep\", \"600\"],
                \"volumeMounts\": [{\"name\": \"cov\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}]
            }]
        }
    }" \
    -n "${NAMESPACE}"

oc wait pod/coverage-extractor --for=condition=Ready \
    -n "${NAMESPACE}" --timeout=120s

# Use /. suffix so oc cp places files directly in coverage_dir, not nested
mkdir -p "${coverage_dir}"
oc cp "${NAMESPACE}/coverage-extractor:${GOCOVERDIR_PATH}/." "${coverage_dir}"
oc delete pod coverage-extractor -n "${NAMESPACE}" --ignore-not-found --wait=false 2>/dev/null || true

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Always clean up coverage-extractor via trap on failure paths.

If oc cp (Line 117) or any later command fails, set -e exits before the explicit delete at Line 118, leaving the extractor pod behind. Add an EXIT trap in collect() so cleanup is guaranteed.

Suggested hardening
 collect() {
     echo "--- E2E Coverage Collection ---"
+    cleanup_extractor() {
+        oc delete pod coverage-extractor -n "${NAMESPACE}" --ignore-not-found --wait=false 2>/dev/null || true
+    }
+    trap cleanup_extractor EXIT
@@
-    oc delete pod coverage-extractor -n "${NAMESPACE}" --ignore-not-found --wait=false 2>/dev/null || true
+    cleanup_extractor
+    trap - EXIT
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/e2e-coverage.sh` around lines 97 - 118, The collect() function can leave
the coverage-extractor pod running if a later command fails; add a cleanup trap
so the pod is always removed on EXIT. Implement a small cleanup function (e.g.,
cleanup_extractor) that runs oc delete pod coverage-extractor -n "${NAMESPACE}"
--ignore-not-found --wait=false 2>/dev/null || true, set trap
'cleanup_extractor' EXIT immediately after creating the pod (right after the oc
run/oc wait sequence), and remove/reset the trap (trap - EXIT) once files are
successfully copied and before returning so normal completion doesn't trigger
redundant cleanup.
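The EXIT-trap pattern recommended above can be demonstrated stand-alone. In this hypothetical sketch, a marker file plays the role of the coverage-extractor pod and a deliberate `false` plays the failing `oc cp`:

```shell
#!/usr/bin/env bash
# Shows that an EXIT trap removes the "pod" even when a later command
# fails under set -e, which is exactly the failure path flagged above.
POD_MARKER="$(mktemp -u /tmp/coverage-extractor.XXXXXX)"

cleanup_extractor() {
    rm -f "${POD_MARKER}"    # stand-in for: oc delete pod coverage-extractor ...
}

# The collect phase runs with errexit in a subshell, like the real script.
(
    set -euo pipefail
    : > "${POD_MARKER}"      # stand-in for: oc run coverage-extractor ...
    trap cleanup_extractor EXIT
    false                    # simulated `oc cp` failure; errexit aborts here
    trap - EXIT              # reached only on the success path
)
collect_status=$?

echo "collect exited with status ${collect_status}"
[ -e "${POD_MARKER}" ] || echo "extractor cleaned up despite the failure"
```

Without the trap, the simulated failure would skip the delete and leave the marker (the pod) behind; with it, cleanup runs on every exit path, and resetting the trap after a successful copy avoids a redundant second delete.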

@openshift-ci

openshift-ci Bot commented May 10, 2026

@siddhibhor-56: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/verify | a467b24 | link | true | /test verify |

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
